Responsible Artificial Intelligence
Everyone Wants Responsible Artificial Intelligence, Few Have It Yet
With great power comes great responsibility. As artificial intelligence continues to gain traction, there has been rising discussion of "responsible AI" (and, closely related, ethical AI). While AI is entrusted with more decision-making workloads, it is still based on algorithms that respond to models and data, as my co-author Andy Thurai and I explain in a recent Harvard Business Review article. As a result, AI often misses the big picture and usually cannot explain the reasoning behind a decision. It certainly isn't ready to assume human qualities that emphasize empathy, ethics, and morality.
Responsible Artificial Intelligence (RAI) Symposium
Please join Northeastern University on Tuesday, October 18th, for a timely discussion on leveraging Responsible Artificial Intelligence (RAI). This symposium will focus on the competitive advantage of AI, which helps drive smarter business practices across government, private industry, and education. According to Accenture, Responsible AI is the practice of designing, developing, and deploying AI with good intention to empower employees and businesses, and to fairly impact customers and society, allowing companies to engender trust and scale AI with confidence. There is a growing emphasis on the responsible use of AI technology and on how it can be used to address, rather than deepen, bias and unfairness in society. Northeastern University is focused on educating its students and community on how to tackle these issues.
What Is Responsible Artificial Intelligence?
Artificial Intelligence (AI) has revolutionized the way we live. Along with the growing influence of algorithms on how business is organized, these new technologies shape our personal decisions about where we travel, what we buy and read, and which music we listen to. Given AI's prevalence as an increasingly powerful technology, it is important that we can trust it to be a source of good for our society. Yet the bias and discrimination inherent in the data AI is built on have been widely documented. Experts from Warwick Business School (WBS) have been working on finding the source of such bias and on how to minimize it.
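The article does not say which mitigation the WBS researchers use, but one common, simple approach to reducing data bias is instance reweighing (after Kamiran and Calders): training examples are weighted so that, statistically, group membership carries no information about the outcome label. The sketch below is only an illustration of that idea under assumed toy data; the variable names and dataset are hypothetical.

```python
# Hypothetical sketch of instance reweighing for bias mitigation.
# Not necessarily the WBS researchers' method; data is synthetic.
import numpy as np

rng = np.random.default_rng(1)
group = rng.integers(0, 2, 1000)  # protected attribute (0/1)
label = rng.integers(0, 2, 1000)  # favorable outcome = 1

weights = np.empty(len(group), dtype=float)
for g in (0, 1):
    for y in (0, 1):
        mask = (group == g) & (label == y)
        # Joint probability the pair would have if group and label were independent
        expected = (group == g).mean() * (label == y).mean()
        observed = mask.mean()  # actual joint frequency in the data
        weights[mask] = expected / observed  # >1 where the pair is underrepresented

# A downstream classifier would then be fit with sample_weight=weights,
# so group membership no longer predicts the label during training.
print(weights[:5])
```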
Intel and Mila join forces for responsible artificial intelligence
Intel yesterday announced a three-year strategic research and co-innovation collaboration with Mila, a Montreal-based artificial intelligence (AI) research institute. As part of this collaboration, more than 20 researchers from Intel and Mila will work on the development of advanced AI techniques to address global challenges such as climate change, the discovery of new materials, and digital biology. "Accelerating research and development of advanced AI to solve some of the world's most critical and challenging problems requires a responsible approach to AI, and the ability to scale computing technology," the partners said in a statement. "As leaders in computing and AI, and with alignment on being a positive, powerful agent of change in our world, Intel and Mila will be able to double down on projects started in 2021, add a third track, and significantly increase support to drive tangible results." "In the face of current global challenges, we must push for a culture of open science between academia and industry to successfully advance AI applications for the benefit of society," said Yoshua Bengio, Founder and Chief Scientific Officer of Mila.
Responsible artificial intelligence is good business
There is increasing evidence of the business benefits of responsible AI (RAI) when companies mitigate risks by curating training and testing data, measuring model bias and accuracy, and documenting their models. Companies that adopt responsible AI experience higher returns on their AI investment. Raj Shekhar writes that business leaders globally must coalesce around the imperative to develop rigorous, consistent standards for responsible AI adoption. Much has been said and written about the risks to public trust and safety arising from the adoption of artificial intelligence (AI)-based applications across multiple sectors. In finance, the use of AI has led to discriminatory credit decisions.
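To make "measuring model bias and accuracy" concrete, here is a minimal sketch of two widely used group-fairness checks on a toy credit-decision dataset: the demographic parity difference and the disparate impact ratio, alongside per-group accuracy. The data, names, and the 0.8 ("80% rule") threshold are illustrative assumptions, not a prescribed RAI standard.

```python
# Illustrative only: group-level bias metrics on synthetic credit decisions.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)     # protected attribute (0/1)
approved = rng.integers(0, 2, size=1000)  # model decision (1 = approve)
repaid = rng.integers(0, 2, size=1000)    # ground truth (1 = repaid)

def approval_rate(mask):
    """Share of applicants in the masked subgroup that the model approves."""
    return approved[mask].mean()

rate_a = approval_rate(group == 0)
rate_b = approval_rate(group == 1)

# Demographic parity difference: gap in approval rates between groups.
parity_diff = abs(rate_a - rate_b)

# Disparate impact ratio: often compared against a 0.8 floor (the "80% rule").
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

# Accuracy per group: does the model predict repayment equally well for both?
for g in (0, 1):
    mask = group == g
    acc = (approved[mask] == repaid[mask]).mean()
    print(f"group {g}: approval rate {approval_rate(mask):.2f}, accuracy {acc:.2f}")

print(f"parity difference: {parity_diff:.2f}, impact ratio: {impact_ratio:.2f}")
```

Metrics like these are only the measurement step; what counts as an acceptable gap is a policy decision, which is exactly why Shekhar argues for consistent standards.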
Pentagon names new chief of responsible artificial intelligence
The Pentagon has tapped artificial intelligence ethics and research expert Diane Staheli to lead the Responsible AI (RAI) Division of its new Chief Digital and AI Office (CDAO), FedScoop confirmed on Tuesday. In this role, Staheli will help steer the Defense Department's development and application of policies, practices, standards and metrics for buying and building AI that is trustworthy and accountable. She enters the position nearly nine months after DOD's first AI ethics lead exited the Joint Artificial Intelligence Center (JAIC), and in the midst of a broad restructuring of the Pentagon's main AI-associated components under the CDAO. "[Staheli] has significant experience in military-oriented research and development environments, and is a contributing member of the Office of the Director of National Intelligence AI Assurance working group," Sarah Flaherty, CDAO's public affairs officer, told FedScoop. Advanced computer-driven systems use AI to perform tasks that generally require some human intelligence.
Responsible Artificial Intelligence Is Still Out Of Reach For Many Organizations
Time for AI proponents to step up. There is strong support for analytics and data science and the capabilities they offer organizations. However, the people charged with developing analytics and artificial intelligence feel resistance from business executives when it comes to getting fully on board with data-driven practices. In addition, efforts to ensure fairness in AI are lagging. That's the word from a recent SAS study of 277 data managers and scientists, which finds that, overall, more than two-thirds were satisfied with the outcomes of their analytical projects. At the same time, 42% say data science results are not used by business decision makers, making this one of the main barriers they face.
Responsible Artificial Intelligence in Workforce Recruiting
In mission-critical operations, artificial intelligence (AI) has the potential to produce incredible benefits, not only for businesses but also for the people they serve and employ. You see it when systems detect fraudulent purchases and keep a consumer's account safe. It's in autonomous and self-driving cars, which are programmed to help keep drivers safe and avoid collisions. In each of these examples, AI is a tool for learning complex patterns, including some that are practically undetectable by humans. The result is more impactful and, with appropriate oversight, better and fairer decision-making.
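The fraud-detection example above is the classic pattern-learning case: a model learns what typical behavior looks like and flags deviations. As one hedged illustration (the article names no method; scikit-learn's IsolationForest and the toy features here are assumptions), a minimal sketch might look like this:

```python
# Hypothetical sketch of anomaly-based fraud flagging on synthetic data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Toy transactions: [amount in dollars, hour of day]
normal = np.column_stack([rng.normal(60, 20, 500), rng.normal(14, 3, 500)])
fraud = np.column_stack([rng.normal(900, 50, 5), rng.normal(3, 1, 5)])
X = np.vstack([normal, fraud])

# Isolation Forest learns the shape of "typical" purchases and isolates outliers.
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)  # -1 marks a suspected anomaly, 1 a normal point

print(f"flagged {np.sum(flags == -1)} of {len(X)} transactions for review")
```

In practice such flags feed a human review queue rather than an automatic block, which is where the "appropriate oversight" the article mentions comes in.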
Artificial intelligence carries a huge upside. But potential harms need to be managed
Artificial intelligence and machine learning have the potential to contribute to the resolution of some of the most intractable problems of our time, such as climate change and pandemics. But they have the capacity to cause harm too. And they can, if not used properly, perpetuate historical injustices and structural inequalities. To mitigate their potential harms, the world needs frameworks for the governance of data that are economically enabling and that preserve rights.